15 research outputs found

    A General Theory of (Identification in the) Limit and Convergence (to the Truth)

    I propose a new definition of identification in the limit (also called convergence to the truth) as a new success criterion that is meant to complement, rather than replace, the classic definition due to Gold (1967). The new definition is designed to explain how successful learning is possible in a kind of scenario that Gold's classic account ignores: the kind of scenario in which the entire infinite data stream to be presented incrementally to the learner is not presupposed to completely determine the correct learning target. From a purely mathematical point of view, the new definition employs a convergence concept that generalizes net convergence and sits in between pointwise convergence and uniform convergence. Two results are proved to suggest that the new definition provides a success criterion that is by no means weak: (i) between the new identification in the limit and Gold's classic one, neither implies the other; (ii) if a learning method identifies the correct target in the limit in the new sense, any U-shaped learning involved therein has to be redundant and can be removed while maintaining the new kind of identification in the limit. I conclude that we should have (at least) two success criteria that correspond to two senses of identification in the limit: the classic one and the one proposed here. They are complementary: meeting either of the two is good; meeting both at the same time, if possible, is even better.
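    For reference, the two standard convergence notions that the abstract places its new concept between can be stated as follows, for a sequence of functions f_n : X → R converging to f (the notation here is generic and mine, not the paper's):

    ```latex
    % Pointwise convergence: the index N may depend on the point x.
    f_n \to f \text{ pointwise} \iff
      \forall x \in X \;\forall \varepsilon > 0 \;\exists N \;\forall n \ge N :
      \lvert f_n(x) - f(x) \rvert < \varepsilon

    % Uniform convergence: a single N works for all x simultaneously.
    f_n \to f \text{ uniformly} \iff
      \forall \varepsilon > 0 \;\exists N \;\forall n \ge N \;\forall x \in X :
      \lvert f_n(x) - f(x) \rvert < \varepsilon
    ```

    Uniform convergence implies pointwise convergence but not conversely, so a concept "in between" is one implied by uniform convergence and implying pointwise convergence.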

    Belief Revision Theory



    Propositional Reasoning that Tracks Probabilistic Reasoning

    Bayesians model one's doxastic state by subjective probabilities. But in traditional epistemology, in logic-based artificial intelligence, and in everyday life, one's doxastic state is usually expressed in a qualitative, binary way: either one accepts (believes) a proposition or one does not. What is the relationship between qualitative and probabilistic belief? I show that, besides the familiar lottery paradox (Kyburg 1961), there are two new, diachronic paradoxes that are more serious. A solution to the paradoxes, old and new, is provided by means of a new account of the relationship between qualitative and probabilistic belief. I propose that propositional beliefs should crudely but aptly represent one's probabilistic credences. Aptness should include responses to new information so that propositional belief revision tracks Bayesian conditioning: if belief state B aptly represents degrees of belief p, then the revised belief state B∗E should aptly represent the conditional degrees of belief p(·|E). I explain how to characterize synchronic aptness and qualitative belief revision to ensure the tracking property in the sense just defined. I also show that the tracking property is impossible if acceptance is based on thresholds or if qualitative belief revision is based on the familiar AGM belief revision theory of Alchourrón, Gärdenfors, and Makinson (1985).
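    The tracking property described in the abstract can be written compactly as a commutation condition between qualitative revision and Bayesian conditioning (a sketch in my own notation; "rep" for "aptly represents" is my label, not the paper's):

    ```latex
    % Tracking: if B aptly represents p, then revising B by evidence E
    % must aptly represent p conditioned on E.
    B \;\mathrm{rep}\; p \;\implies\; (B \ast E) \;\mathrm{rep}\; p(\,\cdot \mid E\,)

    % Threshold-based acceptance, one of the rules the paper shows
    % cannot satisfy tracking: accept exactly the propositions whose
    % probability meets a fixed threshold t.
    B_t(p) = \{\, A : p(A) \ge t \,\}, \qquad t \in (\tfrac{1}{2}, 1]
    ```

    The threshold rule is the natural foil here: it is the rule behind the lottery paradox mentioned in the abstract, since each proposition "ticket i loses" can clear the threshold while their conjunction does not.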

    A Tale of Two Epistemologies?

    So-called 'traditional epistemology' and 'Bayesian epistemology' share a word, but it may often seem that the enterprises hardly share a subject matter. They differ in their central concepts. They differ in their main concerns. They differ in their main theoretical moves. And they often differ in their methodology. However, in the last decade or so, there have been a number of attempts to build bridges between the two epistemologies. Indeed, many would say that there is just one branch of philosophy here - epistemology. There is a common subject matter after all. In this paper, we begin by playing the role of a 'bad cop', emphasizing many apparent points of disconnection, and even conflict, between the approaches to epistemology. We then switch roles, playing a 'good cop' who insists that the approaches are engaged in common projects after all. We look at various ways in which the gaps between them have been bridged, and we consider the prospects for bridging them further. We conclude that this is an exciting time for epistemology, as the two traditions can learn, and have started learning, from each other.